Abstract:
Better surface quality of every component is essential for proper functioning and to reduce wear-and-tear losses in moving machine parts. Traditional processes are not used for finishing soft or brittle materials because their high abrading force produces a poorer surface. In a competitive business environment, manufacturing industries face challenges in the machining of ceramics, superalloys and composites, which require a high precision level and superior surface quality at optimum machining cost. Advanced finishing processes are capable of precisely finishing soft or hard materials without damaging their surface or subsurface. This paper presents a comprehensive literature review of the state-of-the-art high-performance surface finishing processes used in manufacturing industries. The processes are divided into three main categories: traditional, advanced and hybrid finishing processes. Limited research has been reported on advanced processes in which the finishing force can be easily controlled externally by a magnetic field or other means. A finishing medium called magnetorheological fluid is used in all these processes; it is a mixture of abrasive particles, CIPs, binder and additives. The rheological properties of magnetorheological fluid can be controlled externally by varying the magnetic field. Because the force acting on the abrasive particles is controlled, the material removal rate is also controlled. Finally, the drawbacks and missing aspects of the related literature are highlighted and a list of potential issues for future research directions is recommended.
Abstract:
Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite their impressive performance, concerns remain over whether these GNN-based job schedulers meet users' expectations about other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider formal verification of GNN-based job schedulers. We address several domain-specific challenges such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers based on carefully designed algorithms that combine abstractions, refinements, solvers, and proof transfer. Our experimental results show that vegas achieves significant speed-up when verifying important properties of a state-of-the-art GNN-based scheduler compared to previous methods.
Abstract:
We present a novel method for scalable and precise certification of deep neural networks. The key technical insight behind our approach is a new abstract domain which combines floating point polyhedra with intervals and is equipped with abstract transformers specifically tailored to the setting of neural networks. Concretely, we introduce new transformers for affine transforms, the rectified linear unit (ReLU), sigmoid, tanh, and maxpool functions. We implemented our method in a system called DeepPoly and evaluated it extensively on a range of datasets, neural architectures (including defended networks), and specifications. Our experimental results indicate that DeepPoly is more precise than prior work while scaling to large networks. We also show how to combine DeepPoly with a form of abstraction refinement based on trace partitioning. This enables us to prove, for the first time, the robustness of the network when the input image is subjected to complex perturbations such as rotations that employ linear interpolation.
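The abstract-transformer idea can be illustrated with the much simpler interval domain, which DeepPoly strictly improves upon by additionally keeping relational polyhedral constraints. The sketch below (with hypothetical weights, not from the paper) propagates sound lower/upper bounds through an affine layer and a ReLU:

```python
import numpy as np

def affine_bounds(l, u, W, b):
    # Interval transformer for y = W x + b: split W into its positive
    # and negative parts so each output bound uses the correct input bound.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def relu_bounds(l, u):
    # ReLU is monotone, so interval bounds pass through elementwise.
    return np.maximum(l, 0), np.maximum(u, 0)

# Tiny 2-layer network on the input box [-1, 1]^2 (illustrative weights).
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

l, u = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
l, u = relu_bounds(*affine_bounds(l, u, W1, b1))
l, u = affine_bounds(l, u, W2, b2)
print(l, u)  # sound output bounds: [0.5] [3.5]
```

Because intervals forget correlations between neurons, the bounds grow quickly with depth; DeepPoly's per-neuron linear lower/upper constraints are what recover much of that lost precision.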
Abstract:
We present Pasado, a technique for synthesizing precise static analyzers for Automatic Differentiation. Our technique allows one to automatically construct a static analyzer specialized for the Chain Rule, Product Rule, and Quotient Rule computations for Automatic Differentiation in a way that abstracts all of the nonlinear operations of each respective rule simultaneously. By directly synthesizing an abstract transformer for the composite expressions of these three most common rules of AD, we are able to obtain significant precision improvement compared to prior works which compose standard abstract transformers together suboptimally. We prove our synthesized static analyzers sound and additionally demonstrate the generality of our approach by instantiating these AD static analyzers with different nonlinear functions, different abstract domains (both intervals and zonotopes) and both forward-mode and reverse-mode AD. We evaluate Pasado on multiple case studies, namely soundly computing bounds on a neural network's local Lipschitz constant, soundly bounding the sensitivities of financial models, certifying monotonicity, and lastly, bounding sensitivities of the solutions of differential equations from climate science and chemistry for verified ranges of initial conditions and parameters. The local Lipschitz constants computed by Pasado on our largest CNN are up to 2750× more precise compared to the existing state-of-the-art zonotope analysis. The bounds obtained on the sensitivities of the climate, chemical, and financial differential equation solutions are between 1.31–2.81× more precise (on average) compared to a state-of-the-art zonotope analysis.
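The precision loss from composing standard transformers can be seen with a toy quotient-rule example (this is plain interval arithmetic, not Pasado's synthesized transformer). For f(x) = x/(1+x) on x ∈ [0, 1], the true derivative 1/(1+x)² lies in [0.25, 1], but evaluating the quotient rule (u′v − uv′)/v² term by term over intervals loses the correlation between u and v:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(ps), max(ps))
    def recip(self):
        assert self.lo > 0 or self.hi < 0  # interval must not contain 0
        return Interval(1/self.hi, 1/self.lo)

x, one = Interval(0.0, 1.0), Interval(1.0, 1.0)
# Composed interval quotient rule for d/dx [x/(1+x)] = (u'v - u v')/v^2:
u, du, v, dv = x, one, one + x, one
composed = (du*v - u*dv) * (v*v).recip()   # loose: [0, 2]
# Jointly abstracting the simplified composite 1/(1+x)^2:
joint = ((one + x) * (one + x)).recip()    # tight: [0.25, 1]
print(composed, joint)
```

The composed result has width 2, while abstracting the composite expression as a whole yields the exact range; Pasado automates this kind of joint abstraction for the AD rules, including for zonotopes.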
Abstract:
We present a novel, general construction to abstractly interpret higher-order automatic differentiation (AD). Our construction allows one to instantiate an abstract interpreter for computing derivatives up to a chosen order. Furthermore, since our construction reduces the problem of abstractly reasoning about derivatives to abstractly reasoning about real-valued straight-line programs, it can be instantiated with almost any numerical abstract domain, both relational and non-relational. We formally establish the soundness of this construction. We implement our technique by instantiating our construction with both the non-relational interval domain and the relational zonotope domain to compute both first and higher-order derivatives. In the latter case, we are the first to apply a relational domain to automatic differentiation for abstracting higher-order derivatives, and hence we are also the first abstract interpretation work to track correlations across not only different variables, but different orders of derivatives. We evaluate these instantiations on multiple case studies, namely robustly explaining a neural network and more precisely computing a neural network's Lipschitz constant. For robust interpretation, first and second derivatives computed via zonotope AD are up to 4.76× and 6.98× more precise, respectively, compared to interval AD. For Lipschitz certification, we obtain bounds that are up to 11,850× more precise with zonotopes, compared to the state-of-the-art interval-based tool.
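The "derivatives up to a chosen order via straight-line programs" idea is essentially Taylor-mode (jet) forward AD: propagate truncated Taylor coefficients through each operation. A minimal concrete sketch over floats (the construction above would replace each coefficient with an abstract value such as an interval or zonotope):

```python
import math

# Truncated Taylor coefficients (a "jet") of f at x0: f(x0 + t) = sum_k c_k t^k.
ORDER = 3

def jet_add(a, b):
    return [x + y for x, y in zip(a, b)]

def jet_mul(a, b):
    # Cauchy product, truncated at ORDER.
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(ORDER + 1)]

def var(x0):
    # Seed the independent variable: x = x0 + 1*t.
    return [x0, 1.0] + [0.0] * (ORDER - 1)

# f(x) = x^3 at x0 = 2: f = 8, f' = 12, f'' = 12, f''' = 6.
x = var(2.0)
f = jet_mul(jet_mul(x, x), x)
derivs = [f[k] * math.factorial(k) for k in range(ORDER + 1)]
print(derivs)  # [8.0, 12.0, 12.0, 6.0]
```

Replacing the float coefficients with interval (or zonotope) coefficients, and each arithmetic operation with its abstract transformer, yields sound enclosures of all derivatives up to the chosen order at once.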
Abstract:
Numerical abstract domains such as Polyhedra, Octahedron, Octagon, Interval, and others are an essential component of static program analysis. The choice of domain offers a performance/precision tradeoff ranging from cheap and imprecise (Interval) to expensive and precise (Polyhedra). Recently, significant speedups were achieved for Octagon and Polyhedra by manually decomposing their transformers to work with the Cartesian product of projections associated with partitions of the variable set. While practically useful, this manual process is time-consuming, error-prone, and has to be applied from scratch for every domain. In this paper, we present a generic approach for decomposing the transformers of sub-polyhedra domains along with conditions for checking whether the decomposed transformers lose precision with respect to the original transformers. These conditions are satisfied by most practical transformers, thus our approach is suitable for increasing the performance of these transformers without compromising their precision. Furthermore, our approach is "black box": it does not require changes to the internals of the original non-decomposed transformers or additional manual effort per domain. We implemented our approach and applied it to the domains of Zones, Octagon, and Polyhedra. We then compared the performance of the decomposed transformers obtained with our generic method versus the state of the art: the (non-decomposed) PPL for Polyhedra and the much faster ELINA (which uses manual decomposition) for Polyhedra and Octagon. Against ELINA we demonstrate finer partitions and an associated speedup of about 2x on average. Our results indicate that the general construction presented in this work is a viable method for improving the performance of sub-polyhedra domains. It enables designers of abstract domains to benefit from decomposition without re-writing all of their transformers from scratch as required by prior work.
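The core of any such decomposition is grouping variables into independent blocks: variables that never co-occur in a constraint can live in separate factors of the Cartesian product. A union-find sketch of that partition-finding step (the paper's actual contribution, the per-transformer precision-preservation conditions, is not shown here):

```python
# Find independent variable blocks from constraint "footprints".
# Each constraint is given as the set of variable indices it mentions.
def partition(num_vars, constraints):
    parent = list(range(num_vars))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for vs in constraints:
        vs = list(vs)
        for v in vs[1:]:
            union(vs[0], v)  # all variables in one constraint share a block

    blocks = {}
    for v in range(num_vars):
        blocks.setdefault(find(v), []).append(v)
    return sorted(blocks.values())

# x0-x1 and x2-x3 interact; x4 appears in no constraint -> 3 blocks.
print(partition(5, [{0, 1}, {2, 3}]))  # [[0, 1], [2, 3], [4]]
```

Finer partitions mean smaller factors, and since transformer cost is typically super-linear in the number of variables per block, this is where the reported speedups come from.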
Abstract:
Complete verification of deep neural networks (DNNs) can exactly determine whether the DNN satisfies a desired trustworthy property (e.g., robustness, fairness) on an infinite set of inputs or not. Despite tremendous progress over the years in improving the scalability of complete verifiers on individual DNNs, they are inherently inefficient when a deployed DNN is updated to improve its inference speed or accuracy, because the expensive verifier needs to be run from scratch on the updated DNN. To improve efficiency, we propose a new, general framework for incremental and complete DNN verification based on the design of novel theory, data structures, and algorithms. Our contributions implemented in a tool named IVAN yield an overall geometric mean speedup of 2.4x for verifying challenging MNIST and CIFAR10 classifiers and a geometric mean speedup of 3.8x for the ACAS-XU classifiers over the state-of-the-art baselines.
Abstract:
This educational review article aims to provide information on the central nervous system (CNS) infectious and parasitic diseases that frequently cause seizures and acquired epilepsy in the developing world. We explain the difficulties in defining acute symptomatic seizures, which are common in patients with meningitis, viral encephalitis, malaria, and neurocysticercosis, most of which are associated with increased mortality and morbidity, including subsequent epilepsy. Geographic location determines the common causes of infectious and parasitic diseases in a particular region. Management issues encompass prompt treatment of acute symptomatic seizures and the underlying CNS infection, correction of associated predisposing factors, and decisions regarding the appropriate choice and duration of antiseizure therapy. Although healthcare provider education, to recognize and diagnose seizures and epilepsy related to these diseases, is a feasible objective to save lives, prevention of CNS infections and infestations is the only definitive way forward to reduce the burden of epilepsy in developing countries.
Abstract:
Neurocysticercosis (NCC) is infestation of the human brain by the larval stage of the tapeworm Taenia solium and is the most prevalent central nervous system (CNS) helminthiasis. The disease is widespread in tropical and subtropical regions of the world, including the Indian subcontinent, China, Sub-Saharan Africa, and Central and South America, and contributes substantially to the burden of epilepsy in these areas (1). CNS involvement is seen in 60-90% of systemic cysticercosis. About 2.5 million people worldwide are infected with T. solium, and antibodies to T. solium are seen in up to 25% of people in endemic areas (1-3). A higher prevalence of epilepsy and seizures in endemic countries is partly because of a high prevalence of cysticercosis in these regions. Seizures are thought to be caused by NCC in as many as 30% of adult patients and in 51% of children in population-based studies from endemic regions (2). About 12% of admissions to neurological services in endemic regions are attributed to NCC, and nearly half a million deaths occurring annually worldwide can be attributed directly or indirectly to NCC (Bern et al.). Punctate calcific foci on CT scan are a very common finding in asymptomatic people residing in endemic areas, found in 14-20% of CT scans. Both seizures and positive cysticercus serology are associated with the detection of cysticerci on CT scans. Seroprevalence using a recently developed CDC-based enzyme-linked immunotransfer blot (EITB) assay is estimated at 8-12% in Latin America and 4.9-24% in Africa and South-East Asia. It is estimated that 20 million people harbour neurocysticercosis worldwide (1).
Abstract:
The present work shows the first implementation of a two-order-moments-conserving finite volume scheme (FVS) for approximating multidimensional aggregation population balance equations (PBEs) on a structured triangular grid. The scheme is based on preservation of the zeroth-order moment and conservation of the first-order moments. Our main aim is to demonstrate the ability of the FVS to adapt well to a structured triangular grid and hence improve the accuracy of the number density function as well as of various order moments. The numerical results obtained by the FVS on a triangular grid are compared with the cell average technique. The comparison is also extended to illustrate that the FVS with a triangular grid provides numerical results with higher precision and at lower computational time than the FVS with a rectangular grid. Additionally, we study the mixing state of a bicomponent population of clusters (granules) characterized by the normalized variance of excess solute, x, a parameter that measures the deviation of the composition of each granule from the overall mean. It is shown that the accuracy of the total variance of the excess solute improves when a triangular grid is used in place of a rectangular grid.
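The moments being preserved and conserved have a simple discrete form. In one dimension (the paper works with multidimensional triangular cells; the values below are hypothetical), the j-th moment of a discretized number density is μⱼ = Σᵢ nᵢ xᵢʲ Δxᵢ, where μ₀ is the total particle number (which aggregation reduces) and μ₁ is the total mass (which aggregation must conserve):

```python
# j-th moment of a number density n(x) discretized on a 1-D grid:
# mu_j = sum_i n_i * x_i**j * dx_i
def moment(j, x, n, dx):
    return sum(ni * xi**j * dxi for xi, ni, dxi in zip(x, n, dx))

x  = [0.5, 1.5, 2.5, 3.5]   # cell centers (hypothetical uniform grid)
n  = [4.0, 2.0, 1.0, 0.5]   # number density per cell (hypothetical)
dx = [1.0] * 4              # cell widths

m0 = moment(0, x, n, dx)  # zeroth moment: total number of particles
m1 = moment(1, x, n, dx)  # first moment: total mass, conserved by aggregation
print(m0, m1)  # 7.5 9.25
```

A moments-conserving FVS is designed so that its discrete aggregation update reproduces the exact evolution of μ₀ and leaves μ₁ unchanged, regardless of the grid shape.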